Working with both GLSL and HLSL can be daunting, as the two languages are close to each other, but not quite the same. nkGraphics offers capabilities to ease the pain of cross-platform development. Programs offer the ability to define keywords that will automatically translate correctly into both languages. This can be enabled through :
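As a purely hypothetical sketch (the actual nkGraphics method name may differ), enabling would be an opt-in on the program sources :

```cpp
// Hypothetical method name : opt into keyword translation for this program
programSourcesSettings.setUseCrossCompilerDefines(true) ;
```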
However, this system has its limits, and this section will provide advice to help when developing for many platforms !
This section lists the pitfalls to avoid when working with cross-renderer compatibility in mind.
Declaring vectors can be done in many ways in both languages. However, a declaration that is valid in HLSL might not work in GLSL. For instance, this line is correct in HLSL, but GLSL will choke on it :
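As an illustrative reconstruction, using the translated vector keyword :

```hlsl
// Scalar initialization : HLSL splats the value, GLSL rejects the implicit conversion
nkFloat4 vec = 0.0 ;
```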
GLSL will require this line to be written as :
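Illustratively :

```glsl
// Single-argument constructor : a valid splat in GLSL
nkFloat4 vec = nkFloat4(0.0) ;
```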
But HLSL won't like it.
To write code that works in both, it is necessary to use their common explicit constructor, leading to :
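For instance :

```hlsl
// Explicit four-component constructor : valid in both HLSL and GLSL
nkFloat4 vec = nkFloat4(0.0, 0.0, 0.0, 0.0) ;
```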
This is true for any vector type : nkFloatX, nkIntX, nkUintX...
During rendering, OpenGL and DirectX need to know how to map pixels to the screen.
DirectX chooses to consider the top left of it as the origin.
OpenGL chooses to consider the bottom left of it as the origin.
While fine in their own context, this leads to inconsistencies when having to work with both of them at the same time.
Nilkins was first developed using DirectX and, as such, expects resources with their origin at the top left.
To cope with OpenGL's reference, rendering occurs "upside down" compared to what it expects.
This means that internally, the projection matrix's Y axis will be flipped.
Front culling winding order will also be inverted to account for this change.
Finally, just before swapping, Nilkins will flip back the texture before presenting it to the screen.
This occurs automatically, and client code should not have to worry about this.
The only drawback of this feature is that there is a small price to pay to flip the texture back when using a screen context.
However, this price should be small enough on current hardware.
Note that this operation only occurs when swapping, meaning that offscreen contexts, or rendering without swapping, won't be concerned.
As a rule of thumb : consider that everything behaves as it does in DirectX, with top-left origins, and let Nilkins deal with the OpenGL differences internally.
However, this can lead to a small discrepancy in a program. Consider this line :
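A representative example, written here in HLSL flavor, would be deriving texture coordinates from a projected position :

```hlsl
// Project a position, then derive [0, 1] texture coordinates from it
nkFloat4 projected = mul(projMatrix, worldPos) ;
nkFloat2 uv = projected.xy / projected.w * 0.5 + 0.5 ;
```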
In HLSL, the projection matrix won't flip anything, while GLSL's will. Thus, to correctly map both positions together, it will be necessary to flip one of them :
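A sketch of such a flip, assuming a hypothetical NK_GLSL define identifying the language being compiled :

```glsl
#ifdef NK_GLSL // hypothetical define, name may differ
    // Compensate the flipped Y axis of the OpenGL projection matrix
    uv.y = 1.0 - uv.y ;
#endif
```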
Another difference between OpenGL and DirectX is that NDC Z coordinates end up within [-1, 1] in OpenGL, and [0, 1] in DirectX.
Of course, Nilkins handles that for you by reworking the projection matrix. However, this is something to keep in mind while writing code that depends on projected depth.
A simple linear remapping should do the trick.
Taking back the last example :
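A sketch of the remapping, again assuming a hypothetical NK_GLSL define :

```glsl
float depth = projected.z / projected.w ;

#ifdef NK_GLSL // hypothetical define, name may differ
    // Remap OpenGL's [-1, 1] depth range to DirectX's [0, 1]
    depth = depth * 0.5 + 0.5 ;
#endif
```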
This ensures that both HLSL and GLSL depths are correctly mapped to [0, 1], making them usable without distinction in the following instructions.
One big difference between both graphics APIs is that a sampler is a specific object in DirectX, but not in OpenGL.
OpenGL mixes the sampler within the texture information.
This means that to have a texture and be able to sample it, you will need both a texture and a sampler in HLSL, while only a combined "sampler" in GLSL.
Nilkins' API separates textures and samplers, and OpenGL's model does not map cleanly onto this. As such, there is a specific behavior to keep in mind : in OpenGL, the texture bound to a given slot is always paired with the sampler bound to that same slot.
To understand better what this implies, consider this short program bit :
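A sketch of such a setup, with hypothetical call names (the actual nkGraphics API may differ) :

```cpp
// Two textures on slots 0 and 1, two samplers on slots 0 and 1
program->setTexture(texture0, 0) ;
program->setTexture(texture1, 1) ;
program->setSampler(sampler0, 0) ;
program->setSampler(sampler1, 1) ;
```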
We add two textures to slots 0 and 1, and two samplers to slots 0 and 1. Let's see how this would translate to be usable in HLSL :
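The declarations could look like :

```hlsl
Texture2D tex0 : register(t0) ;
Texture2D tex1 : register(t1) ;
SamplerState samp0 : register(s0) ;
SamplerState samp1 : register(s1) ;
```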
We have two textures, two samplers.
When sampling a texture, it is easy to swap between one or another sampler.
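For instance :

```hlsl
nkFloat4 colorA = tex0.Sample(samp0, uv) ;
nkFloat4 colorB = tex0.Sample(samp1, uv) ; // same texture, other sampler
```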
Let's see how we would need to write this in GLSL :
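The equivalent GLSL declarations and sampling could look like :

```glsl
uniform sampler2D tex0 ; // texture and sampling state, combined
uniform sampler2D tex1 ;

vec4 color = texture(tex0, uv) ; // the sampling state is implied
```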
And that's it : there is no way to specify the sampler object itself. The implication is that, within GLSL, it is impossible to switch between one sampler and another when sampling a given texture.
Now, let's focus on how Nilkins works with that.
In DirectX, the mapping is exactly what it expects : both textures are mapped to their slots, along with both samplers.
In OpenGL, both textures are linked to their slots, which are common with the samplers slots.
Considering the example above, OpenGL then gets a mix within its own sampler2D units.
First is (t0, s0).
Second is (t1, s1).
If you want to sample t0 with s1, you're out of luck.
Sadly, there is no way within OpenGL's API to address this. As such, writing cross-renderer code requires reworking it a bit.
So let's say we want to write code working with both APIs the same way.
We need to sample t0 with s0 and s1, and t1 with s1.
Code could become, for our GLSL / HLSL :
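A sketch of such reworked code, assuming a hypothetical NK_HLSL define identifying the language : declarations stay per-language, while the sampling logic becomes common by always pairing slot i's texture with slot i's sampler.

```hlsl
#ifdef NK_HLSL // hypothetical define, name may differ
    Texture2D tex0 : register(t0) ;     // t0
    Texture2D tex1 : register(t1) ;     // t0, bound a second time
    Texture2D tex2 : register(t2) ;     // t1
    SamplerState samp0 : register(s0) ; // s0
    SamplerState samp1 : register(s1) ; // s1
    SamplerState samp2 : register(s2) ; // s1, bound a second time

    #define SAMPLE_0(uv) tex0.Sample(samp0, uv)
    #define SAMPLE_1(uv) tex1.Sample(samp1, uv)
    #define SAMPLE_2(uv) tex2.Sample(samp2, uv)
#else
    uniform sampler2D tex0 ; // pair (t0, s0)
    uniform sampler2D tex1 ; // pair (t0, s1)
    uniform sampler2D tex2 ; // pair (t1, s1)

    #define SAMPLE_0(uv) texture(tex0, uv)
    #define SAMPLE_1(uv) texture(tex1, uv)
    #define SAMPLE_2(uv) texture(tex2, uv)
#endif

// Common sampling code, identical for both languages
nkFloat4 a = SAMPLE_0(uv) ; // t0 with s0
nkFloat4 b = SAMPLE_1(uv) ; // t0 with s1
nkFloat4 c = SAMPLE_2(uv) ; // t1 with s1
```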
This would require changing the program setup to :
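With the same hypothetical call names as before, the setup could become :

```cpp
// One slot per (texture, sampler) pair needed by the shader
program->setTexture(texture0, 0) ; // -> (t0, s0)
program->setTexture(texture0, 1) ; // t0 bound twice -> (t0, s1)
program->setTexture(texture1, 2) ; // -> (t1, s1)
program->setSampler(sampler0, 0) ;
program->setSampler(sampler1, 1) ;
program->setSampler(sampler1, 2) ; // s1 bound twice
```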
Let's break down the logic here, per graphics API. OpenGL : each slot now corresponds to exactly one (texture, sampler) pair, giving slot 0 = (t0, s0), slot 1 = (t0, s1), and slot 2 = (t1, s1). DirectX : textures and samplers remain independent objects, so binding t0 and s1 twice is redundant but harmless, and the shader simply pairs the texture of slot i with the sampler of slot i.
Effectively, this makes the code work correctly with both renderer types. The cost is having to take this into account in both the C++ and the HLSL / GLSL, sometimes linking more resources than necessary for one given language. Even if not totally clean per se, this approach allows keeping the codebase common for both graphics APIs.
This section will break down the creation of one program and show how Nilkins can be used to write code compatible with both GLSL and HLSL.
The aim of the cross-compiler defines is to make most of the code shareable. In practice, this works well, and bits that can easily be put in common include :
- Type declarations, through the nkFloatX, nkIntX, nkUintX... keywords
- Constant buffer declarations
- The processing itself : function bodies, math, and control flow
The hardest parts to port are currently those not mapping to an easy keyword that could identify them all. This can be :
- Stage input and output declarations, relying on semantics in HLSL and on attributes and built-ins in GLSL
- Texture and sampler declarations, as covered in the previous section
- Some intrinsics, like matrix multiplication (mul in HLSL, the * operator in GLSL)
To understand better, here is a full snippet of a vertex stage :
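As an illustrative sketch (the NK_HLSL define is a hypothetical language-identifying define, and the constant buffer declaration is assumed shareable through the cross-compiler keywords) :

```hlsl
// Constant buffer : assumed shareable between both languages
cbuffer PassBuffer : register(b0)
{
    nkFloat4 offset ;
}

#ifdef NK_HLSL // hypothetical define, name may differ

    struct VertexInput
    {
        nkFloat4 position : POSITION ;
    } ;

    struct PixelInput
    {
        nkFloat4 position : SV_POSITION ;
    } ;

    PixelInput main (VertexInput input)
    {
        PixelInput result ;
        result.position = input.position + offset ;
        return result ;
    }

#else

    layout (location = 0) in vec4 position ;

    void main ()
    {
        gl_Position = position + offset ;
    }

#endif
```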
In this simple example, a lot of code has to be differentiated between GLSL and HLSL. However, we are able to put some of it in common, like the constant buffer for instance. By declaring custom defines within the program itself, it becomes possible to put even more code in common. But those defines will depend on your own code and which variables need to be set / sent.
Let's switch to the pixel shader, pushing the idea of custom defines a bit further :
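As an illustrative sketch, with the same hypothetical NK_HLSL define and a custom SAMPLE_INPUT define hiding the per-language sampling syntax :

```hlsl
#ifdef NK_HLSL // hypothetical define, name may differ
    Texture2D inputTex : register(t0) ;
    SamplerState inputSampler : register(s0) ;
    #define SAMPLE_INPUT(uv) inputTex.Sample(inputSampler, uv)
#else
    uniform sampler2D inputTex ;
    #define SAMPLE_INPUT(uv) texture(inputTex, uv)
#endif

// Shared processing : same declaration and body in both languages
nkFloat4 processColor (nkFloat4 color)
{
    // Simple tone-down, identical whatever the language
    return color * nkFloat4(0.5, 0.5, 0.5, 1.0) ;
}

#ifdef NK_HLSL

    struct PixelInput
    {
        nkFloat4 position : SV_POSITION ;
        nkFloat2 uv : TEXCOORD0 ;
    } ;

    nkFloat4 main (PixelInput input) : SV_TARGET
    {
        return processColor(SAMPLE_INPUT(input.uv)) ;
    }

#else

    in vec2 uv ;
    out vec4 fragColor ;

    void main ()
    {
        fragColor = processColor(SAMPLE_INPUT(uv)) ;
    }

#endif
```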
This demonstrates better how processing can easily be made common to both languages : the function declaration is the same, along with all the processing done on the color. In more complex programs where processing is important, this helps lower the maintenance cost, by ensuring you don't have to correct or alter two different code paths.